    Evaluation of haptic guidance virtual fixtures and 3D visualization methods in telemanipulation—a user study

    Get PDF
    © 2019, The Author(s). This work presents a user-study evaluation of various visual and haptic feedback modes on a real telemanipulation platform. Of particular interest is the potential for haptic guidance virtual fixtures and 3D-mapping techniques to enhance efficiency and awareness in a simple teleoperated valve-turn task. An RGB-Depth camera is used to gather real-time color and geometric data of the remote scene, and the operator is presented with either a monocular color video stream, a 3D-mapping voxel representation of the remote scene, or the ability to place a haptic guidance virtual fixture to help complete the telemanipulation task. The efficacy of the feedback modes is then explored experimentally through a user study, and the different modes are compared on the basis of objective and subjective metrics. Despite the simplicity of the task and the number of evaluation metrics, results show that the haptic virtual fixture yielded significantly better collision avoidance than 3D visualization alone. Anticipated performance enhancements were also observed when moving from 2D to 3D visualization. The remaining comparisons lead to exploratory inferences that inform future directions for focused and statistically significant studies.
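
    To make the guidance virtual fixture concept concrete, the sketch below computes a simple spring-damper force that pulls a teleoperated tool tip toward a straight guidance path, such as an approach path to the valve. This is a minimal illustration under assumed gains and geometry, not the implementation used in the study; the function name, path endpoints, and gain values are hypothetical.

```python
# Minimal sketch (not from the study) of a guidance virtual fixture:
# a spring-damper force pulls the teleoperated tool tip toward a
# straight guidance path. Gains, path endpoints, and names are
# illustrative assumptions.
import numpy as np

def guidance_fixture_force(tip_pos, tip_vel, path_start, path_end,
                           stiffness=300.0, damping=5.0):
    """Return a 3D force pulling the tool tip toward the guidance segment."""
    p0 = np.asarray(path_start, dtype=float)
    p1 = np.asarray(path_end, dtype=float)
    tip = np.asarray(tip_pos, dtype=float)
    d = p1 - p0
    # Parameter of the closest point on the segment, clamped to [0, 1]
    t = np.clip(np.dot(tip - p0, d) / np.dot(d, d), 0.0, 1.0)
    error = (p0 + t * d) - tip
    return stiffness * error - damping * np.asarray(tip_vel, dtype=float)

# Example: tip 2 cm off an assumed straight approach path to the valve
force = guidance_fixture_force(tip_pos=[0.02, 0.0, 0.10],
                               tip_vel=[0.0, 0.0, 0.0],
                               path_start=[0.0, 0.0, 0.0],
                               path_end=[0.0, 0.0, 0.30])
print(force)  # pushes the tip back toward the path
```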

    Surgical Tool Segmentation with Pose-Informed Morphological Polar Transform of Endoscopic Images

    Get PDF
    This paper presents a tool-pose-informed variable-center morphological polar transform to enhance segmentation of endoscopic images. The representation, while not lossless, transforms rigid tool shapes into consistently more rectangular morphologies that may be more amenable to image segmentation networks. The proposed method was evaluated using the U-Net convolutional neural network, with the input endoscopic images represented in one of four coordinate formats: (1) the original rectangular image representation, (2) the morphological polar coordinate transform, (3) the proposed variable-center transform about the tool-tip pixel, and (4) the proposed variable-center transform about the tool vanishing-point pixel. Previous work relied on the observations that endoscopic images typically exhibit unused border regions with content in the shape of a circle (since the image sensor is designed to be larger than the image circle to maximize available visual information in the constrained environment) and that the region of interest (ROI) is ideally near the endoscopic image center. That work sought an intelligent method that, given an input image, carefully selects between representations (1) and (2) for the best segmentation prediction. In this extension, the image-center reference constraint for the polar transformation in method (2) is relaxed via the development of a variable-center morphological transformation. The choice of transform center leads to different spatial distributions of image loss, and the transform-center location can be informed by the robot kinematic model and endoscopic image data. In particular, this work examines the tool tip and the tool vanishing point on the image plane as candidate centers. Experiments were conducted for each of the four image representations using a data set of 8360 endoscopic images from real sinus surgery. Segmentation performance was evaluated with standard metrics, and some insight into the effects of image loss and tool location on performance is provided. Overall, the results are promising, showing that selecting a transform center based on tool shape features using the proposed method can improve segmentation performance.
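
    As a rough illustration of resampling an endoscopic image into polar coordinates about an arbitrary center (for example, the tool-tip pixel or the tool vanishing point obtained from the robot kinematic model), the sketch below uses OpenCV's warpPolar as a stand-in. It is not the paper's morphological transform; the function name, output size, and example center are assumptions.

```python
# Minimal sketch (not the authors' code) of a polar resampling of an
# endoscopic image about an arbitrary center pixel, e.g. the tool-tip
# or the tool vanishing point from the robot kinematic model.
# OpenCV's warpPolar is used as a stand-in for the paper's variable
# center morphological polar transform.
import cv2
import numpy as np

def polar_about_center(image, center_xy, out_size=(512, 512)):
    """Resample `image` into polar coordinates centered at `center_xy`."""
    h, w = image.shape[:2]
    cx, cy = center_xy
    # Radius large enough to reach the farthest image corner from the center
    max_radius = max(np.hypot(cx, cy), np.hypot(w - cx, cy),
                     np.hypot(cx, h - cy), np.hypot(w - cx, h - cy))
    return cv2.warpPolar(image, out_size, (float(cx), float(cy)),
                         float(max_radius),
                         cv2.INTER_LINEAR | cv2.WARP_POLAR_LINEAR)

# Hypothetical usage: transform about an assumed tool-tip pixel before
# feeding the image to the segmentation network
frame = cv2.imread("endoscopy_frame.png")
polar = polar_about_center(frame, center_xy=(420, 310))
```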

    Telelocomotion—remotely operated legged robots

    Get PDF
    © 2020 by the authors. Licensee MDPI, Basel, Switzerland. Teleoperated systems enable human control of robotic proxies and are particularly amenable to inaccessible environments unsuitable for autonomy. Examples include emergency response, underwater manipulation, and robot-assisted minimally invasive surgery. However, teleoperation architectures have predominantly been employed in manipulation tasks, and are thus only useful when the robot is within reach of the task. This work introduces the idea of extending teleoperation to enable online human remote control of legged robots, or telelocomotion, to traverse challenging terrain. Traversing unpredictable terrain remains a challenge for autonomous legged locomotion, as demonstrated by robots commonly falling in high-profile robotics contests. Telelocomotion can reduce the risk of mission failure by leveraging the high-level understanding of human operators to command the gaits of legged robots in real time. In this work, a haptic telelocomotion interface was developed. Two within-user studies validate the proof-of-concept interface: (i) the first compared basic interfaces with the haptic interface for control of a simulated hexapedal robot at various levels of traversal complexity; (ii) the second presented a physical implementation and investigated the efficacy of the proposed haptic virtual fixtures. Results are promising for the use of haptic feedback in telelocomotion for complex traversal tasks.
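
    The abstract does not specify how the haptic device maps to gait commands, so the sketch below is only an assumed illustration of one plausible scheme: stylus displacement mapped to forward speed and yaw rate, with a simple spring force acting as a workspace-boundary virtual fixture. All gains, limits, and names are hypothetical.

```python
# Assumed illustration (not the paper's interface) of mapping haptic
# stylus displacement to high-level gait commands for a legged robot,
# plus a spring "virtual fixture" that resists leaving the command
# workspace. All gains and limits are hypothetical.
import numpy as np

WORKSPACE = 0.05   # m, half-width of the usable stylus workspace
K_WALL = 500.0     # N/m, stiffness of the boundary fixture

def stylus_to_gait(stylus_xy, v_max=0.5, w_max=1.0):
    """Map stylus displacement (m) to forward speed (m/s) and yaw rate (rad/s)."""
    x, y = np.clip(np.asarray(stylus_xy, dtype=float), -WORKSPACE, WORKSPACE)
    return v_max * x / WORKSPACE, w_max * y / WORKSPACE

def boundary_force(stylus_xy):
    """Spring force pushing the stylus back inside the command workspace."""
    p = np.asarray(stylus_xy, dtype=float)
    excess = np.sign(p) * np.maximum(np.abs(p) - WORKSPACE, 0.0)
    return -K_WALL * excess

v_cmd, w_cmd = stylus_to_gait([0.03, -0.01])
print(v_cmd, w_cmd, boundary_force([0.07, 0.0]))
```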

    Chaos in nonlinear random walks with nonmonotonic transition probabilities

    No full text
    Random walks serve as important tools for studying complex network structures, yet their dynamics in cases where transition probabilities are not static remain underexplored and poorly understood. Here we study nonlinear random walks that occur when transition probabilities depend on the state of the system. We show that when these transition probabilities are nonmonotonic, i.e., not uniformly biased towards the most densely or sparsely populated nodes but rather directing random walkers with more nuance, chaotic dynamics emerge. Using multiple transition probability functions and a range of networks with different connectivity properties, we demonstrate that this phenomenon is generic. Thus, when such nonmonotonic properties are key ingredients in nonlinear transport applications, complicated and unpredictable behaviors may result.
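
    The sketch below illustrates the general idea of a state-dependent (nonlinear) random walk with a nonmonotonic bias: the probability of stepping to a neighbor depends on that neighbor's current occupancy through a bump-shaped function that favors moderately populated nodes. The bias function, network, and parameters are illustrative assumptions, not the paper's exact model.

```python
# Illustrative sketch (not the paper's exact model) of a nonlinear
# random walk on a network: the probability of stepping to neighbor j
# depends on its current occupancy p_j through a nonmonotonic bias f,
# here a bump that favors moderately populated nodes over empty or
# crowded ones.
import numpy as np

def step(p, A, beta=6.0):
    """One synchronous update of the occupancy distribution p."""
    f = np.exp(beta * p * (1.0 - p))       # nonmonotonic bias function
    W = A * f[np.newaxis, :]               # weight each edge by the target's bias
    W /= W.sum(axis=1, keepdims=True)      # row-normalize into a transition matrix
    return p @ W

rng = np.random.default_rng(0)
n = 20
A = (rng.random((n, n)) < 0.3).astype(float)   # random directed network
np.fill_diagonal(A, 0.0)
A[A.sum(axis=1) == 0, 0] = 1.0                 # avoid nodes with no out-edges

p = np.full(n, 1.0 / n)
for _ in range(1000):                          # iterate the occupancy dynamics
    p = step(p, A)
print(p.round(3))
```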

    Local style preservation in improved GAN-driven synthetic image generation for endoscopic tool segmentation

    Get PDF
    Accurate semantic image segmentation from medical imaging can enable intelligent vision-based assistance in robot-assisted minimally invasive surgery. The human body and surgical procedures are highly dynamic, and while machine vision presents a promising approach, sufficiently large training image sets for robust performance are either costly or unavailable. This work examines three novel generative adversarial network (GAN) methods for providing usable synthetic tool images using only surgical background images and a few real tool images. The best of the three approaches generates realistic tool textures while preserving local background content by incorporating both a style-preservation and a content-loss component into the proposed multi-level loss function. The approach is quantitatively evaluated, and results suggest that the synthetically generated training tool images enhance UNet tool segmentation performance. More specifically, with a random set of 100 cadaver and live endoscopic images from the University of Washington Sinus Dataset, the UNet trained with synthetically generated images using the presented method achieved improvements of 35.7% in mean Dice coefficient and 30.6% in Intersection over Union score over training with purely real images. This study is promising for the use of more widely available routine screening endoscopy to preoperatively generate synthetic training tool images for intraoperative UNet tool segmentation.
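
    The exact multi-level loss is not given in the abstract, so the sketch below shows one assumed way such a generator objective could combine an adversarial term with style-preservation (Gram-matrix matching) and background content terms computed on feature maps from a fixed extractor such as VGG. Layer choices, weights, and names are hypothetical.

```python
# Assumed sketch (not the paper's exact formulation) of a generator
# objective combining an adversarial term with style-preservation
# (Gram-matrix matching) and background content terms computed on
# feature maps from a fixed feature extractor such as VGG. Weights,
# layer choices, and names are hypothetical.
import torch
import torch.nn.functional as F

def gram(feat):
    """Gram matrix of a feature-map batch of shape (B, C, H, W)."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def generator_loss(d_fake_logits, fake_feats, real_tool_feats, background_feats,
                   w_adv=1.0, w_style=10.0, w_content=1.0):
    """Adversarial + style-preservation + local background content loss."""
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    style = sum(F.mse_loss(gram(f), gram(r))         # match real tool texture
                for f, r in zip(fake_feats, real_tool_feats))
    content = sum(F.mse_loss(f, b)                   # keep local background content
                  for f, b in zip(fake_feats, background_feats))
    return w_adv * adv + w_style * style + w_content * content
```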
